
    Contribution of the frontal lobe to externally and internally specified verbal responses: fMRI evidence

    It has been suggested that within the frontal cortex there is a lateral-to-medial shift in the control of action, with the lateral premotor area (PMA) involved in externally specified actions and the medial supplementary motor area (SMA) involved in internally specified actions. Recent brain imaging studies demonstrate, however, that the control of externally and internally specified actions may involve more complex and overlapping networks, encompassing not only the PMA and the SMA but also the pre-SMA and the lateral prefrontal cortex (PFC). The aim of the present study was to determine whether these frontal regions are differentially involved in the production of verbal responses when those responses are externally specified and when they are internally specified. Participants engaged in three overt speaking tasks that differed in the degree of response specification: reading words aloud (externally specified) or generating words aloud from narrow or broad semantic categories (internally specified). Using fMRI, the location and magnitude of BOLD activity for these tasks were measured in a group of ten participants. Compared with rest, all tasks activated the primary motor area and the SMA-proper, reflecting their common role in speech production. The magnitude of the activity in the PFC (Brodmann area 45), the left ventral premotor area (PMAv) and the pre-SMA increased for word generation, suggesting that each of these three regions plays a role in internally specified action selection. This confirms previous reports concerning the participation of the pre-SMA in verbal response selection. The pattern of activity in the PMAv suggests participation in both externally and internally specified verbal actions.
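    As an illustration of the kind of region-wise contrast described above, the following minimal sketch tests, per region of interest, whether BOLD responses are larger for word generation than for reading aloud across ten participants. The ROI names, beta values, and paired-test approach are placeholders for illustration only, not the study's actual analysis pipeline.

```python
# Hypothetical sketch: paired contrast of per-subject BOLD betas
# (word generation vs. reading aloud) in three frontal ROIs.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_subjects = 10
rois = ["BA45", "left_PMAv", "pre-SMA"]

# Placeholder betas (percent signal change vs. rest) per subject and ROI.
betas_read = {roi: rng.normal(0.4, 0.15, n_subjects) for roi in rois}
betas_generate = {roi: rng.normal(0.7, 0.15, n_subjects) for roi in rois}

for roi in rois:
    t, p = ttest_rel(betas_generate[roi], betas_read[roi])
    diff = np.mean(betas_generate[roi] - betas_read[roi])
    print(f"{roi}: mean generate-read difference = {diff:.2f}, t = {t:.2f}, p = {p:.3f}")
```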

    Contribution of the pre-SMA to the production of words and non-speech oral motor gestures, as revealed by repetitive transcranial magnetic stimulation (rTMS)

    An emerging theoretical perspective, largely based on neuroimaging studies, suggests that the pre-SMA is involved in planning cognitive aspects of motor behavior and language, such as linguistic and non-linguistic response selection. Neuroimaging studies, however, cannot indicate whether a brain region is equally important to all tasks in which it is activated. In the present study, we tested the hypothesis that the pre-SMA is an important component of response selection using an interference technique. High-frequency repetitive TMS (10 Hz) was used to interfere with the functioning of the pre-SMA during tasks requiring the selection of words and oral gestures under different selection modes (forced, volitional) and attention levels (high attention, low attention). Results show that TMS applied to the pre-SMA interferes selectively with the volitional selection condition, resulting in longer RTs. The low- and high-attention forced selection conditions were unaffected by TMS, demonstrating that the pre-SMA is sensitive to selection mode but not to attentional demands. TMS affected the volitional selection of words and oral gestures similarly, reflecting the response-independent nature of the pre-SMA contribution to response selection. The implications of these results are discussed.
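    The central comparison here is whether rTMS lengthens reaction times selectively in the volitional selection condition. A minimal sketch of such a condition-wise RT comparison is shown below; the sample size, RT values, and paired-test approach are illustrative assumptions, not the study's data or analysis.

```python
# Hypothetical sketch: effect of pre-SMA rTMS on reaction times (RTs)
# by selection mode, using paired comparisons across participants.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n = 16  # placeholder sample size
conditions = ["volitional", "forced_low_attention", "forced_high_attention"]

# Placeholder mean RTs (ms) per participant, with and without TMS.
rt_no_tms = {c: rng.normal(650, 60, n) for c in conditions}
rt_tms = {
    "volitional": rt_no_tms["volitional"] + rng.normal(40, 20, n),            # slowed
    "forced_low_attention": rt_no_tms["forced_low_attention"] + rng.normal(0, 20, n),
    "forced_high_attention": rt_no_tms["forced_high_attention"] + rng.normal(0, 20, n),
}

for c in conditions:
    t, p = ttest_rel(rt_tms[c], rt_no_tms[c])
    slowing = np.mean(rt_tms[c] - rt_no_tms[c])
    print(f"{c}: mean TMS slowing = {slowing:.1f} ms, p = {p:.3f}")
```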

    On the selection of words and oral motor responses: evidence of a response-independent fronto-parietal network

    Several brain areas, including the medial and lateral premotor areas and the prefrontal cortex, are thought to be involved in response selection. It is unclear, however, what the specific contribution of each of these areas is. It is also unclear whether the response selection process operates independently of response modality or whether a number of specialized processes are recruited depending on the behaviour of interest. In the present study, the neural substrates for different response selection modes (volitional and stimulus-driven) were compared, using sparse-sampling functional magnetic resonance imaging, for two different response modalities: words and comparable oral motor gestures. Results demonstrate that response selection relies on a network of prefrontal, premotor and parietal areas, with the pre-supplementary motor area (pre-SMA) at the core of the process. Overall, this network is sensitive to the manner in which responses are selected, although it lacks the medio-lateral organization suggested by Goldberg (1985). In contrast, the network shows little sensitivity to the modality of the response, suggesting a domain-general selection process. Theoretical implications of these results are discussed.
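    The design crosses selection mode (volitional vs. stimulus-driven) with response modality (word vs. oral gesture) within subjects. The sketch below illustrates how the two main effects could be tested on ROI activity; all numbers and the simple paired-test approach are hypothetical, not the study's analysis.

```python
# Hypothetical sketch: 2x2 within-subject design (selection mode x response
# modality) on ROI activity, testing the two main effects with paired t-tests.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n = 14  # placeholder sample size

# Placeholder pre-SMA betas per participant for the four cells.
cells = {
    ("volitional", "word"): rng.normal(0.8, 0.2, n),
    ("volitional", "gesture"): rng.normal(0.8, 0.2, n),
    ("stimulus_driven", "word"): rng.normal(0.5, 0.2, n),
    ("stimulus_driven", "gesture"): rng.normal(0.5, 0.2, n),
}

# Main effect of selection mode: average over modality, then compare.
volitional = (cells[("volitional", "word")] + cells[("volitional", "gesture")]) / 2
driven = (cells[("stimulus_driven", "word")] + cells[("stimulus_driven", "gesture")]) / 2
t_mode, p_mode = ttest_rel(volitional, driven)
print(f"selection mode: t = {t_mode:.2f}, p = {p_mode:.3f}")

# Main effect of modality: average over selection mode, then compare.
word = (cells[("volitional", "word")] + cells[("stimulus_driven", "word")]) / 2
gesture = (cells[("volitional", "gesture")] + cells[("stimulus_driven", "gesture")]) / 2
t_mod, p_mod = ttest_rel(word, gesture)
print(f"response modality: t = {t_mod:.2f}, p = {p_mod:.3f}")
```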

    Somatosensory Event-related Potentials from Orofacial Skin Stretch Stimulation

    Cortical processing associated with orofacial somatosensory function in speech has received limited experimental attention because of the difficulty of providing precise and controlled stimulation. This article introduces a technique for recording somatosensory event-related potentials (ERPs) that uses a novel mechanical stimulation method involving skin deformation with a robotic device. Controlled deformation of the facial skin is used to modulate kinesthetic inputs through excitation of cutaneous mechanoreceptors. By combining somatosensory stimulation with electroencephalographic recording, somatosensory evoked responses can be successfully measured at the level of the cortex. Somatosensory stimulation can also be combined with stimulation of other sensory modalities to assess multisensory interactions. For speech, orofacial stimulation is combined with speech sound stimulation to assess the contribution of multisensory processing, including the effects of timing differences. The ability to precisely control orofacial somatosensory stimulation during speech perception and speech production, together with ERP recording, is an important tool that provides new insight into the neural organization and neural representations for speech. The video component of this article can be found at http://www.jove.com/video/53621.
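    The method combines timed skin-stretch stimulation with EEG, and the evoked response is obtained by epoching and averaging the EEG around stimulation onsets. A minimal sketch of that epoching-and-averaging step is given below; the sampling rate, window, and signals are placeholders rather than the recording parameters used in the article.

```python
# Hypothetical sketch: extracting and averaging EEG epochs around
# somatosensory (skin-stretch) stimulation onsets to obtain an ERP.
import numpy as np

fs = 1000                      # sampling rate (Hz), placeholder
pre, post = 0.1, 0.5           # epoch window: 100 ms before to 500 ms after onset
rng = np.random.default_rng(3)

# Placeholder continuous EEG (one channel, microvolts) and stimulation onsets.
eeg = rng.normal(0.0, 10.0, fs * 120)              # 2 minutes of data
onsets = np.arange(2 * fs, len(eeg) - fs, 2 * fs)  # one stimulus every 2 s

epochs = []
for onset in onsets:
    seg = eeg[onset - int(pre * fs): onset + int(post * fs)]
    seg = seg - seg[: int(pre * fs)].mean()        # baseline-correct on pre-stimulus interval
    epochs.append(seg)

erp = np.mean(epochs, axis=0)                      # average across trials = evoked response
times = np.arange(len(erp)) / fs - pre
print(f"{len(epochs)} epochs averaged; ERP peak at {times[np.argmax(np.abs(erp))] * 1000:.0f} ms")
```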

    Neural correlates of auditory-somatosensory interaction in speech perception

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared with unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from that of the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity changes due to stimulus timing were seen 160-220 ms after somatosensory onset, mostly over the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of the sensory inputs in speech production.
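    A common way to quantify multisensory interaction in ERPs is to compare the multisensory response with the sum of the unisensory responses at each relative timing. The sketch below illustrates that additive-model comparison in the 160-220 ms window; the ERPs, lags, and window indexing are placeholder assumptions, not the study's data or exact analysis.

```python
# Hypothetical sketch: additive-model test for auditory-somatosensory
# interaction, ERP(AS) - [ERP(A) + ERP(S)], at each stimulus-onset asynchrony.
import numpy as np

rng = np.random.default_rng(4)
fs, n_samples = 1000, 600            # placeholder sampling rate and epoch length
lags_ms = [-90, 0, 90]               # somatosensory lead, synchronous, somatosensory lag

# Placeholder averaged ERPs (one channel, microvolts), epochs starting at
# somatosensory onset.
erp_auditory = rng.normal(0, 1, n_samples)
erp_somatosensory = rng.normal(0, 1, n_samples)
erp_multisensory = {lag: rng.normal(0, 1, n_samples) for lag in lags_ms}

window = slice(160, 220)             # samples 160-220 ms after somatosensory onset
for lag in lags_ms:
    interaction = erp_multisensory[lag] - (erp_auditory + erp_somatosensory)
    print(f"lag {lag:+d} ms: mean interaction in 160-220 ms window = "
          f"{interaction[window].mean():.2f} uV")
```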

    Imaging speech production using fMRI

    Human speech is a well-learned, sensorimotor, ecological behavior, ideal for the study of neural processes and brain-behavior relations. With the advent of modern neuroimaging techniques such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), the potential for investigating the neural mechanisms of speech motor control, speech motor disorders, and speech motor development has increased. However, a practical issue has limited the application of fMRI to spoken language production and other related behaviors (singing, swallowing): producing these behaviors during volume acquisition introduces motion-induced signal changes that confound the activation signals of interest. A number of approaches, ranging from signal processing to the use of silent or covert speech, have attempted to remove or prevent the effects of motion-induced artefact. However, these approaches are flawed for a variety of reasons. An alternative approach, only recently applied to single-word production, uses pauses in volume acquisition during the production of natural speech motion. Here we present representative data illustrating the problems associated with motion artefacts, along with qualitative results from subjects producing short sentences and orofacial nonspeech movements in the scanner. Using pauses or silent intervals in volume acquisition together with block designs, individual subjects show robust activation without motion-induced signal artefact. This approach is an efficient method for studying the neural basis of spoken language production and the effects of speech and language disorders using fMRI.
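    The key idea is to separate speaking from image acquisition in time: each volume is acquired while the participant is silent, and speech occurs in the silent gap before the next acquisition. The sketch below generates such a sparse-sampling trial schedule; the acquisition time, gap duration, and speaking offset are illustrative values, not the parameters used in the study.

```python
# Hypothetical sketch: a sparse-sampling schedule in which each volume
# acquisition is followed by a silent gap during which the utterance is
# produced, so that orofacial/head motion does not coincide with acquisition.
def sparse_schedule(n_trials, ta=2.0, gap=8.0):
    """Return (volume_start, speak_start) times in seconds for each trial.

    ta  -- volume acquisition time (scanner noise present), placeholder
    gap -- silent interval after acquisition, used for speaking, placeholder
    """
    schedule = []
    t = 0.0
    for _ in range(n_trials):
        volume_start = t
        speak_start = t + ta + 1.0   # speak 1 s into the silent gap (placeholder)
        schedule.append((volume_start, speak_start))
        t += ta + gap                # effective TR = acquisition + silent gap
    return schedule

for vol, speak in sparse_schedule(4):
    print(f"acquire at {vol:5.1f} s, speak at {speak:5.1f} s")
```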

    Reverberation limits the release from informational masking obtained in the harmonic and binaural domains

    A difference in fundamental frequency (ΔF0) and a difference in spatial location (ΔSL) are two cues known to provide masking release when multiple speakers talk at once in a room. Situations were examined in which reverberation should have no effect on the mechanisms underlying the release from energetic masking produced by these two cues. Speech reception thresholds measured with unpredictable target sentences and with the coordinate response measure followed a similar pattern. Both ΔF0s and ΔSLs provided masking releases in the presence of non-speech maskers (matched in excitation pattern and temporal envelope to the speech maskers), and these releases were, as intended, robust to reverberation. Larger masking releases were obtained for speech maskers but, critically, they were affected by reverberation. The results suggest that reverberation either limits the amount of informational masking there is to begin with, or affects its release by ΔF0s or ΔSLs.
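    Masking release is typically expressed as the drop in speech reception threshold (SRT) when a cue such as ΔF0 or ΔSL is added, computed separately for each room condition. The sketch below shows that arithmetic on placeholder SRTs; all numbers are invented for illustration and are not the study's results.

```python
# Hypothetical sketch: masking release as the SRT drop (in dB) produced by a
# cue (dF0 or dSL), compared between anechoic and reverberant conditions.
srt = {  # placeholder SRTs, in dB target-to-masker ratio
    ("anechoic",    "baseline"): 2.0,
    ("anechoic",    "dF0"):     -3.0,
    ("anechoic",    "dSL"):     -4.0,
    ("reverberant", "baseline"): 3.0,
    ("reverberant", "dF0"):      1.0,
    ("reverberant", "dSL"):      0.5,
}

for room in ("anechoic", "reverberant"):
    for cue in ("dF0", "dSL"):
        release = srt[(room, "baseline")] - srt[(room, cue)]
        print(f"{room:11s} {cue}: masking release = {release:.1f} dB")
```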

    Auditory-Motor Learning during Speech Production in 9-11-Year-Old Children

    BACKGROUND: Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input to the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9- to 11-year-old children, as well as in adults. Following a period of speech practice under altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations. CONCLUSIONS: The results indicate that 9- to 11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.
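    Adaptation to altered auditory feedback is commonly quantified as the change in produced formant frequency relative to baseline, expressed as a fraction of the applied perturbation. The sketch below illustrates that computation; the perturbation size, formant values, and sample size are placeholders, not the study's measurements or exact method.

```python
# Hypothetical sketch: quantifying auditory-motor adaptation as the change in
# produced first-formant frequency (F1) relative to baseline, expressed as a
# fraction of the applied feedback perturbation.
import numpy as np

rng = np.random.default_rng(5)
perturbation_hz = 200                                 # placeholder upward F1 shift in feedback

# Placeholder produced F1 (Hz) per participant, baseline vs. end of training.
baseline_f1 = rng.normal(800, 40, 20)
adapted_f1 = baseline_f1 - rng.normal(70, 30, 20)     # speakers lower F1 to compensate

compensation = baseline_f1 - adapted_f1               # positive = opposing the shift
percent_compensation = 100 * compensation / perturbation_hz
print(f"mean compensation: {compensation.mean():.0f} Hz "
      f"({percent_compensation.mean():.0f}% of the perturbation)")
```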

    Activation in Right Dorsolateral Prefrontal Cortex Underlies Stuttering Anticipation

    People who stutter learn to anticipate many of their overt stuttering events. Despite the critical role of anticipation, particularly the way responses to anticipation shape stuttering behaviors, the neural bases of anticipation are unknown. We used a novel approach to identify anticipated and unanticipated words in 22 adult stutterers; these words were then produced in a delayed-response task while hemodynamic activity was measured using functional near-infrared spectroscopy (fNIRS). Twenty-two control participants were included such that each individualized set of anticipated/unanticipated words was produced by one stutterer and one control. We conducted an analysis on the right dorsolateral prefrontal cortex (R-DLPFC) based on converging lines of evidence from the stuttering and cognitive control literatures. We also assessed connectivity between the R-DLPFC and the right supramarginal gyrus (R-SMG), two key nodes of the frontoparietal network (FPN), to assess the role of cognitive control, particularly error-likelihood monitoring, in stuttering anticipation. All analyses focused on the five-second anticipation phase preceding the go signal to produce speech. Results indicate that anticipated words are associated with elevated activation in the R-DLPFC, and that, compared with non-stutterers, stutterers exhibit greater activity in the R-DLPFC irrespective of anticipation. Further, anticipated words are associated with reduced connectivity between the R-DLPFC and the R-SMG. These findings highlight the potential roles of the R-DLPFC and the greater FPN as a neural substrate of stuttering anticipation. The results also support previous accounts of error-likelihood monitoring and action-stopping in stuttering anticipation. Overall, this work offers numerous directions for future research with clinical implications for targeted neuromodulation.
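    For the fNIRS analysis, activation and connectivity are evaluated within the five-second anticipation window preceding each go signal. The sketch below illustrates one simple way to compute a window-wise R-DLPFC activation estimate and an R-DLPFC to R-SMG correlation; the sampling rate, signals, trial timing, and correlation-based connectivity measure are assumptions for illustration, not the study's pipeline.

```python
# Hypothetical sketch: mean HbO activation in an R-DLPFC channel during the
# 5-s anticipation window, plus its correlation with an R-SMG channel as a
# simple connectivity estimate. All data and timings are placeholders.
import numpy as np
from scipy.stats import pearsonr

fs = 10                                   # fNIRS sampling rate (Hz), placeholder
rng = np.random.default_rng(6)
hbo_dlpfc = rng.normal(0, 1, fs * 300)    # 5 minutes of HbO signal
hbo_smg = 0.5 * hbo_dlpfc + rng.normal(0, 1, fs * 300)

trial_onsets_s = np.arange(10, 290, 20)   # go signals every 20 s (placeholder)
windows = [(int((t - 5) * fs), int(t * fs)) for t in trial_onsets_s]  # 5-s anticipation

activ, conn = [], []
for start, stop in windows:
    activ.append(hbo_dlpfc[start:stop].mean())
    r, _ = pearsonr(hbo_dlpfc[start:stop], hbo_smg[start:stop])
    conn.append(r)

print(f"mean anticipation-phase R-DLPFC HbO: {np.mean(activ):.2f} (a.u.)")
print(f"mean R-DLPFC to R-SMG correlation: {np.mean(conn):.2f}")
```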